Current Virtual Reality (VR) environments lack the rich haptic signals that humans experience during real-life interactions, such as the sensation of texture during lateral movement on a surface. Adding realistic haptic textures to VR environments requires a model that generalizes to variations of a user's interaction and to the wide variety of existing textures in the world. Existing methods for haptic texture rendering typically develop one model per texture, resulting in low scalability. We present a deep learning-based action-conditional model for haptic texture rendering and evaluate its perceptual performance in rendering realistic texture vibrations through a multi-part human user study. This model is unified over all materials and uses data from a vision-based tactile sensor (GelSight) to render the appropriate surface conditioned on the user's action in real time. For rendering texture, we use a high-bandwidth vibrotactile transducer attached to a 3D Systems Touch device. The results of our user study show that our learning-based method creates high-frequency texture renderings with comparable or better quality than state-of-the-art methods, without the need to learn a separate model per texture. Furthermore, we show that the method is capable of rendering previously unseen textures using a single GelSight image of their surface.
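The action-conditional rendering loop described above can be sketched roughly as follows. This is a minimal illustration, not the paper's model: the weights, layer sizes, and the two-dimensional texture embedding are hypothetical stand-ins for the trained unified network and the GelSight-derived conditioning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random weights standing in for the trained unified network; the real model
# conditions on a texture embedding derived from a GelSight image plus the
# user's action (scan speed and normal force).
W_in = rng.normal(scale=0.1, size=(16, 4))
W_out = rng.normal(scale=0.1, size=(1, 16))

def predict_vibration(speed, force, texture_embedding):
    """One step: (action, texture embedding) -> one vibration sample."""
    x = np.concatenate(([speed, force], texture_embedding))
    h = np.tanh(W_in @ x)            # shared hidden layer, unified over materials
    return float(W_out @ h)          # sample sent to the vibrotactile transducer

# Render a short vibration buffer for a lateral stroke over one texture.
texture = np.array([0.3, -0.7])      # hypothetical 2-D embedding of one surface
buffer = [predict_vibration(speed=0.05 * t, force=1.0,
                            texture_embedding=texture) for t in range(100)]
```

The key property mirrored here is that one set of weights serves every material; only the conditioning embedding changes between textures.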
Solving real-world sequential manipulation tasks requires robots to have a repertoire of skills applicable to a wide range of circumstances. To acquire such skills using data-driven approaches, we need massive and diverse training data, which is often labor-intensive and non-trivial to collect and curate. In this work, we introduce Active Task Randomization (ATR), an approach that learns visuomotor skills for sequential manipulation by automatically creating feasible and novel tasks in simulation. During training, our approach procedurally generates tasks using a graph-based task parameterization. To adaptively estimate the feasibility and novelty of sampled tasks, we develop a relational neural network that maps each task parameter into a compact embedding. We demonstrate that our approach can automatically create suitable tasks for efficiently training the skill policies to handle diverse scenarios with a variety of objects. We evaluate our method on simulated and real-world sequential manipulation tasks by composing the learned skills using a task planner. Compared to baseline methods, the skills learned using our approach consistently achieve better success rates.
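The feasibility-and-novelty task sampling loop above can be sketched in a few lines. This is a toy sketch, not ATR itself: the flat parameter vector, the `embed` function, and the hand-written feasibility rule stand in for the paper's graph-based parameterization and relational network.

```python
import numpy as np

rng = np.random.default_rng(1)

def embed(task_params):
    """Stand-in for the relational network that maps task parameters
    (here a flat vector; the paper uses a graph) to a compact embedding."""
    return np.tanh(task_params[:2])          # toy 2-D embedding

def feasible(task_params):
    """Toy feasibility predicate; the paper estimates feasibility with a
    learned network rather than a hand-written rule."""
    return abs(task_params.sum()) < 2.0

def novelty(task_params, seen):
    """Novelty as distance to the nearest previously selected task."""
    if not seen:
        return np.inf
    e = embed(task_params)
    return min(np.linalg.norm(e - b) for b in seen)

selected, seen = [], []
for _ in range(50):
    candidates = [c for c in rng.uniform(-1, 1, size=(8, 4)) if feasible(c)]
    if not candidates:
        continue
    best = max(candidates, key=lambda c: novelty(c, seen))  # most novel feasible task
    seen.append(embed(best))
    selected.append(best)
```

Each round proposes a batch of tasks, filters out infeasible ones, and trains on the most novel survivor, so the curriculum keeps moving toward unexplored task regions.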
Multi-object tracking is a cornerstone capability of any robotic system. Most approaches follow a tracking-by-detection paradigm. However, within this framework, detectors function in a low-precision, high-recall regime, ensuring a low number of false negatives while producing a high rate of false positives. This can negatively affect the tracking component by making data association and track lifecycle management more challenging. Additionally, false-negative detections due to difficult scenarios like occlusions can negatively affect tracking performance. Thus, we propose a method that learns shape and spatio-temporal affinities between consecutive frames to better distinguish between true-positive and false-positive detections and tracks, while compensating for false-negative detections. Our method provides a probabilistic matching of detections that leads to robust data association and track lifecycle management. We quantitatively evaluate our method through ablative experiments and on the nuScenes tracking benchmark, where we achieve state-of-the-art results. Our method not only estimates accurate, high-quality tracks but also decreases the overall number of false-positive and false-negative tracks. Please see our project website for source code and demo videos: sites.google.com/view/shasta-3d-mot/home.
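The probabilistic matching idea can be illustrated with a tiny example. This is our own minimal sketch, not the paper's architecture: the affinity values are made up, and a softmax over an extra "new track" column stands in for the learned matching head.

```python
import numpy as np

def match_probabilities(affinity):
    """Row-wise softmax over [tracks | new-track], so every detection gets a
    proper probability distribution; the appended all-zero column is the
    'start a new track' hypothesis."""
    logits = np.concatenate([affinity, np.zeros((affinity.shape[0], 1))], axis=1)
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Toy learned shape/spatio-temporal affinities: 3 detections vs 2 live tracks.
affinity = np.array([[4.0, -2.0],
                     [-1.0, 3.0],
                     [-3.0, -3.0]])   # last detection resembles neither track
probs = match_probabilities(affinity)
assignments = probs.argmax(axis=1)    # index 2 = "new track"
```

Because the output is a distribution rather than a hard match, downstream lifecycle logic can keep low-confidence associations alive instead of immediately killing or spawning tracks.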
When humans perform contact-rich manipulation tasks, customized tools are often necessary and play an important role in simplifying the task. For instance, in our daily life, we use various utensils for handling food, such as knives, forks and spoons. Similarly, customized tools for robots may enable them to more easily perform a variety of tasks. Here, we present an end-to-end framework to automatically learn tool morphology for contact-rich manipulation tasks by leveraging differentiable physics simulators. Previous work approached this problem by introducing manually constructed priors that required a detailed specification of the object's 3D model, grasp pose, and task description to facilitate the search or optimization. In our approach, we instead only need to define the objective with respect to the task performance, and we enable learning a robust morphology by randomizing the task variations. The optimization is made tractable by casting it as a continual learning problem. We demonstrate the effectiveness of our method for designing new tools in several scenarios, such as winding ropes, flipping a box and pushing peas onto a scoop in simulation. We also validate that the shapes discovered by our method help real robots succeed in these scenarios.
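The core loop, gradient steps on a tool parameter while the task is randomized, can be sketched as follows. This is a toy stand-in, not the paper's pipeline: the one-dimensional "scoop width", the quadratic loss, and its analytic gradient replace the differentiable physics simulator.

```python
import numpy as np

rng = np.random.default_rng(2)

def task_loss_grad(width, pea_radius):
    """Toy differentiable task loss: a scoop whose width is ~4x the pea radius
    works best. A differentiable simulator would supply this gradient."""
    target = 4.0 * pea_radius
    loss = (width - target) ** 2
    grad = 2.0 * (width - target)
    return loss, grad

width = 0.1                               # initial tool morphology parameter
for step in range(200):
    pea_radius = rng.uniform(0.2, 0.3)    # randomize the task each step
    _, g = task_loss_grad(width, pea_radius)
    width -= 0.05 * g                     # gradient step on the morphology
```

Because the gradient is averaged over randomized task instances by the stochastic updates, the learned width settles near a value that is robust across the whole task distribution rather than tuned to one instance.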
Recent work on multi-task learning has revealed the benefits of solving related problems in a single neural network. 3D object detection and multi-object tracking (MOT) are two heavily intertwined problems that predict and associate object instance locations over time. However, most previous work on 3D MOT treats the detector as a preceding, separate pipeline, disjointly taking the detector's output as the tracker's input. In this work, we present Minkowski Tracker, a sparse spatio-temporal R-CNN that jointly solves object detection and tracking. Inspired by region-based CNNs (R-CNN), we propose tracking as a second stage of the object detector R-CNN, which predicts assignment probabilities to tracks. First, Minkowski Tracker takes 4D point clouds as input and generates a spatio-temporal bird's-eye-view (BEV) feature map through a 4D sparse convolutional encoder network. Then, our proposed TrackAlign aggregates track region-of-interest (ROI) features from the BEV features. Finally, Minkowski Tracker updates the tracks and their confidence scores based on the detection-to-track match probability predicted from the ROI features. In large-scale experiments, we show that the overall performance gain of our method is due to four factors: 1. the temporal reasoning of the 4D encoder improves detection performance; 2. the multi-task learning of object detection and MOT mutually enhances both tasks; 3. the detection-to-track match score learns an implicit motion model to strengthen track assignment; 4. the detection-to-track match score improves the quality of track confidence scores. As a result, Minkowski Tracker achieves state-of-the-art performance on the nuScenes dataset tracking task without any hand-designed motion model.
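The second-stage idea, turning ROI features into a detection-to-track match probability that also refreshes the track confidence, might look like the following. This is a deliberately simplified sketch: the logits are made up, and the running-average confidence update is our own assumption, not the paper's exact rule.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def update_track(track_score, track_len, det_score, match_logits, track_idx):
    """Second-stage sketch: ROI-feature logits over candidate tracks become a
    match probability; the track confidence is then refreshed as a running
    average weighted by that probability (a simplification, not the paper's
    exact update)."""
    p_match = softmax(match_logits)[track_idx]
    new_score = (track_score * track_len + det_score * p_match) / (track_len + 1)
    return p_match, new_score

# One detection scored against three live tracks; it matches track 0 strongly.
p_match, new_score = update_track(
    track_score=0.6, track_len=4, det_score=0.9,
    match_logits=np.array([2.0, -1.0, 0.0]), track_idx=0)
```

The point mirrored from the abstract is that one learned score drives both the assignment and the confidence bookkeeping, so no hand-designed motion model is needed.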
Grasping is the process of picking up an object by applying forces and torques at a set of contact points. Recent advances in deep learning methods have enabled rapid progress in robotic object grasping. We systematically survey the publications of the past decade, with a particular interest in grasping an object using all six degrees of freedom of the end-effector pose. Our review identifies four common methodologies for robotic grasping: sampling-based approaches, direct regression, reinforcement learning, and exemplar approaches. Furthermore, we identify two "supporting methods" around grasping that use deep learning to assist the grasping process: shape approximation and affordances. We distill the publications found in this systematic review (85 papers) into ten key takeaways that we consider crucial for future robotic grasping and manipulation research. An online version of the survey is available at https://rhys-newbury.github.io/projects/6dof/
Differentiable simulation is a promising toolkit for fast gradient-based policy optimization and system identification. However, existing differentiable simulation approaches have largely addressed scenarios where obtaining smooth gradients is relatively easy, such as systems with smooth dynamics. In this work, we study the challenges that differentiable simulation faces when a single descent run cannot be expected to reach the global optimum, which is often an issue in contact-rich scenarios. We analyze the optimization landscapes of a variety of scenarios containing both rigid bodies and deformable objects. In dynamic environments with highly deformable objects and fluids, differentiable simulators produce rugged landscapes that nonetheless have useful gradients in some parts of the space. We propose a method that combines Bayesian optimization with semi-local "leaps" to obtain a global search method that can use gradients effectively, while also maintaining robust performance in regions with noisy gradients. We show that our method outperforms several gradient-based and gradient-free baselines on an extensive set of experiments in simulation, and we also validate the approach with experiments involving a real robot and deformable objects. Videos and supplementary materials are available at https://tinyurl.com/globdiff
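The global-search-plus-local-gradients pattern can be illustrated on a rugged one-dimensional objective. This is only a caricature of the method: uniform random proposals stand in for the Bayesian optimization surrogate that guides the "leaps", and finite differences stand in for simulator gradients.

```python
import numpy as np

rng = np.random.default_rng(3)

def f(x):
    """Rugged objective: many local minima, global basin near x = 2."""
    return (x - 2.0) ** 2 + 0.3 * np.sin(15.0 * x)

def grad_f(x, eps=1e-5):
    """Finite-difference gradient, standing in for a simulator gradient."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

best_x, best_y = 0.0, f(0.0)
for _ in range(60):
    # Global proposal ("leap"): in the paper this is guided by a Bayesian
    # optimization surrogate; here we simply sample the search interval.
    x = rng.uniform(-4.0, 6.0)
    for _ in range(50):                  # local gradient descent from the leap point
        x -= 0.02 * grad_f(x)
    if f(x) < best_y:
        best_x, best_y = x, f(x)
```

Gradients alone get trapped in the sine wiggles; leaps alone waste samples; combining them lets each leap land somewhere new and each descent polish that landing, which is the division of labor the abstract describes.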
Deformable object manipulation remains a challenging task in robotics research. Conventional techniques for parameter inference and state estimation typically rely on a precise definition of the state space and its dynamics. While this works well for rigid objects and robot states, it is challenging to define the state space of a deformable object and how it evolves over time. In this work, we pose the problem of inferring the physical parameters of deformable objects as a probabilistic inference task defined with a simulator. We propose a novel method for extracting state information from image sequences by representing the state of a deformable object as a distribution embedding. This allows noisy state observations to be incorporated directly into modern Bayesian simulation-based inference tools in a principled manner. Our experiments confirm that we can estimate posterior distributions of physical properties, such as elasticity, friction, and scale, for highly deformable objects such as cloth and rope. Overall, our method addresses the problem probabilistically and helps to better represent how the state of deformable objects evolves.
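Simulation-based inference of a physical parameter can be shown with the simplest possible instance, rejection ABC on a toy simulator. The spring model, noise level, and tolerance below are all invented for illustration; the paper uses modern neural simulation-based inference with image-derived summaries rather than this rejection scheme.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate(stiffness):
    """Toy simulator: steady-state stretch of a spring under unit load,
    standing in for a full deformable-object simulator."""
    return 1.0 / stiffness + rng.normal(scale=0.01)

# "Observed" summary statistic (in the paper, extracted from image sequences).
true_stiffness = 2.0
observed = 1.0 / true_stiffness

# Rejection ABC: keep prior samples whose simulated summary lands close
# to the observation; the kept samples approximate the posterior.
prior = rng.uniform(0.5, 5.0, size=5000)
kept = [k for k in prior if abs(simulate(k) - observed) < 0.02]
posterior_mean = float(np.mean(kept))
```

Even this crude version shows the structure the abstract relies on: no explicit state dynamics are ever written down; the simulator plus a comparison of summaries is enough to concentrate a posterior around the true parameter.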
Neural Radiance Fields (NeRFs) have recently emerged as a powerful paradigm for representing natural, complex 3D scenes. NeRFs represent continuous volumetric density and RGB values in a neural network, and generate photo-realistic images from unseen camera viewpoints via ray tracing. We propose an algorithm for navigating a robot through a 3D environment represented as a NeRF, using only an onboard RGB camera for localization. We assume that the NeRF of the scene has been pre-trained offline, and that the robot's goal is to navigate through the unoccupied space of the NeRF to a goal pose. We introduce a trajectory optimization algorithm that avoids collisions with the high-density regions of the NeRF; it is based on a discrete-time version of differential flatness that can constrain the robot's full pose and control inputs. We also introduce an optimization-based filtering method to estimate the robot's 6-DoF pose and velocity within the NeRF from the onboard RGB camera alone. We combine the trajectory planner with the pose filter in an online replanning loop to provide a vision-based robot navigation pipeline. We show simulation results with a quadrotor navigating through a jungle gym environment, the interior of a church, and Stonehenge using an RGB camera. We also demonstrate an omnidirectional ground robot navigating through the church, which requires it to reorient in order to fit through a narrow gap. Videos of this work can be found at https://mikh3x4.github.io/nerf-navigation/.
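The density-avoiding trajectory optimization can be sketched in 2D. This is a toy sketch only: a single Gaussian blob replaces the NeRF's learned density field, and a simple smoothness-plus-repulsion gradient step replaces the paper's differential-flatness-based optimizer.

```python
import numpy as np

OBSTACLE = np.array([0.5, 0.1])   # toy stand-in for a high-density NeRF region

def density(p):
    """Gaussian blob standing in for the NeRF's volumetric density."""
    return np.exp(-np.sum((p - OBSTACLE) ** 2) / 0.02)

def density_grad(p, eps=1e-4):
    g = np.zeros(2)
    for i in range(2):
        d = np.zeros(2)
        d[i] = eps
        g[i] = (density(p + d) - density(p - d)) / (2 * eps)
    return g

# Straight-line initial guess from start to goal skims past the obstacle.
traj = np.linspace([0.0, 0.0], [1.0, 0.0], 20)

for _ in range(400):
    for t in range(1, 19):                    # endpoints stay fixed
        smooth = traj[t - 1] - 2 * traj[t] + traj[t + 1]   # smoothness pull
        traj[t] = traj[t] + 0.3 * smooth - 0.02 * density_grad(traj[t])

worst_density = max(density(p) for p in traj)
```

The two terms act exactly as in the abstract's planner: the density gradient pushes waypoints out of occupied space while the smoothness term keeps the path dynamically reasonable, and the result is a detour around the blob.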
AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and can be adapted to a wide range of downstream tasks. We call these models foundation models to underscore their critically central yet incomplete character. This report provides a thorough account of the opportunities and risks of foundation models, ranging from their capabilities (e.g., language, vision, robotics, reasoning, human interaction) and technical principles (e.g., model architectures, training procedures, data, systems, security, evaluation, theory) to their applications (e.g., law, healthcare, education) and societal impact (e.g., inequity, misuse, economic and environmental impact, legal and ethical considerations). Although foundation models are based on standard deep learning and transfer learning, their scale has led to new emergent capabilities, and their effectiveness across so many tasks has incentivized homogenization. Homogenization provides powerful leverage but demands caution, since the defects of a foundation model are inherited by all the models adapted from it downstream. Despite the impending widespread deployment of foundation models, we currently lack a clear understanding of how they work, when they fail, and what they are even capable of, owing to their emergent properties. To tackle these questions, we believe much of the critical research on foundation models will require engagement commensurate with their fundamentally sociotechnical nature.